Recently, the focus of the computer vision community has shifted from expensive supervised learning towards self-supervised learning of visual representations. While the performance gap between supervised and self-supervised learning has been narrowing, the time required to train self-supervised deep networks remains an order of magnitude larger than that of their supervised counterparts, which hinders progress, imposes a carbon cost, and limits societal benefits to institutions with substantial resources. Motivated by these issues, this paper investigates reducing the training time of recent self-supervised methods via various model-agnostic strategies that have not previously been applied to this problem. In particular, we study three strategies: an extendable cyclic learning rate schedule, a matched progressive schedule of augmentation magnitude and image resolution, and a hard positive mining strategy based on augmentation difficulty. We show that all three methods combined yield up to a 2.7-fold speed-up in the training time of several self-supervised methods while retaining performance comparable to the standard self-supervised learning setting.
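To make the first two strategies concrete, here is a minimal Python sketch of an extendable cyclic learning rate and a matched progressive resolution/augmentation-magnitude schedule; all hyperparameter values and schedule shapes below are illustrative assumptions, not the paper's settings.

```python
import math

def cyclic_lr(step, cycle_len=1000, lr_max=0.5, lr_min=0.01):
    """Cosine-annealed cyclic learning rate that restarts every cycle,
    so training can be extended by simply appending more cycles."""
    t = (step % cycle_len) / cycle_len
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * t))

def progressive_schedule(step, total_steps, res_min=96, res_max=224,
                         mag_min=0.2, mag_max=1.0):
    """Jointly ramp image resolution and augmentation magnitude, so early
    steps see small, weakly augmented images (cheap) and later steps see
    full-size, strongly augmented ones."""
    p = min(step / total_steps, 1.0)
    resolution = int(res_min + p * (res_max - res_min))
    resolution -= resolution % 32          # keep resolution divisible by 32
    magnitude = mag_min + p * (mag_max - mag_min)
    return resolution, magnitude
```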
The recent sharpness-aware minimization (SAM) is known to find flat minima, which helps improve robustness. SAM modifies the loss function by reporting the maximum loss value within a small neighborhood around the current iterate. However, it uses a Euclidean ball to define the neighborhood, which can be inaccurate since the loss functions of neural networks are typically defined over probability distributions (e.g., class predictive probabilities), rendering the parameter space non-Euclidean. In this paper, we consider the information geometry of the model parameter space when defining the neighborhood, namely replacing SAM's Euclidean ball with an ellipsoid induced by the Fisher information. Our method, dubbed Fisher SAM, defines a more accurate neighborhood structure that conforms to the intrinsic metric of the underlying statistical manifold. For example, SAM may probe the worst-case loss value at a point that is either too close or inappropriately distant because it ignores the geometry of the parameter space, which our Fisher SAM avoids. Recently, another adaptive SAM method stretches/shrinks the Euclidean ball according to the scale of the parameter magnitudes; this can be dangerous, with the potential to destroy the neighborhood structure. We demonstrate the improved performance of the proposed Fisher SAM on several benchmark datasets/tasks.
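In symbols, the change described above can be sketched as follows, where $L$ is the loss, $\rho$ the neighborhood radius, and $F(\theta)$ the Fisher information matrix (a paraphrase of the stated construction, not the paper's full derivation):

```latex
% SAM: worst-case loss over a Euclidean ball around the current iterate
\min_{\theta} \; \max_{\|\epsilon\|_2 \le \rho} \; L(\theta + \epsilon)

% Fisher SAM: the ball becomes the ellipsoid induced by the Fisher information
\min_{\theta} \; \max_{\epsilon^{\top} F(\theta)\, \epsilon \,\le\, \rho^{2}} \; L(\theta + \epsilon)
```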
Self-supervised learning is a powerful paradigm for learning from unlabeled images. A wealth of effective new methods based on instance matching rely on data augmentation to drive learning, and these methods have reached a rough consensus on an augmentation scheme that optimizes popular recognition benchmarks. However, there are strong reasons to suspect that different tasks in computer vision require features that encode different (in)variances, and may therefore need different augmentation strategies. In this paper, we measure the invariances learned by contrastive methods and confirm that they do learn invariance to the augmentations used, further showing that this invariance largely transfers to related real-world changes in pose and illumination. We show that the learned invariances strongly influence downstream task performance, and confirm that different downstream tasks benefit from polar opposite (in)variances, leading to performance loss when the standard augmentation strategy is used. Finally, we demonstrate that a simple fusion of representations with complementary invariances ensures broad transferability to all the diverse downstream tasks considered.
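As a rough illustration of the measurement and fusion ideas above, one might quantify invariance as the feature similarity between an image and its augmented view, and fuse complementary representations by concatenation; the cosine-similarity measure and fusion-by-concatenation below are assumptions for illustration, not the paper's exact protocol.

```python
import torch
import torch.nn.functional as F

def invariance_score(encoder, images, augment):
    """Mean cosine similarity between features of images and their
    augmented views; 1.0 means the encoder is fully invariant."""
    with torch.no_grad():
        z1 = F.normalize(encoder(images), dim=1)
        z2 = F.normalize(encoder(augment(images)), dim=1)
    return (z1 * z2).sum(dim=1).mean().item()

def fused_features(encoders, images):
    """Concatenate representations from encoders trained with
    complementary (in)variances into a single feature vector."""
    with torch.no_grad():
        feats = [F.normalize(enc(images), dim=1) for enc in encoders]
    return torch.cat(feats, dim=1)
```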
In this paper, we present the first attempt at unsupervised SBIR, removing the labeling costs (category annotations and sketch-photo pairings) required by conventional training. Due to the problem's unique cross-domain (sketch and photo) nature, existing single-domain unsupervised representation learning methods perform poorly in this application. We therefore introduce a novel framework that simultaneously performs unsupervised representation learning and sketch-photo domain alignment. Technically, this is achieved by exploiting joint distribution optimal transport (JDOT) to align data from the different domains during representation learning, which we extend with trainable cluster prototypes and a feature memory bank to further improve scalability and efficacy. Extensive experiments show that our framework achieves excellent performance in the new unsupervised setting, and performs comparably or better than the state-of-the-art in the zero-shot setting.
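For context, the standard JDOT objective couples samples across domains through a transport plan $\gamma$ with a cost over both features $g(\cdot)$ and predictions $f(\cdot)$; schematically (generic form only; the framework above adapts it to the unlabeled sketch-photo setting with cluster prototypes and a memory bank, so the weights $\alpha$, $\lambda$ and the exact costs here are assumptions):

```latex
\min_{f,\;\gamma \in \Pi(\mu_{s},\,\mu_{p})} \;
\sum_{i,j} \gamma_{ij}
\Big( \alpha \, \big\| g(x_i^{s}) - g(x_j^{p}) \big\|^{2}
    + \lambda \, \mathcal{L}\big( f(x_i^{s}),\, f(x_j^{p}) \big) \Big)
```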
Self-supervised visual representation learning has seen huge progress recently, but no large scale evaluation has compared the many models now available. We evaluate the transfer performance of 13 top self-supervised models on 40 downstream tasks, including many-shot and few-shot recognition, object detection, and dense prediction. We compare their performance to a supervised baseline and show that on most tasks the best self-supervised models outperform supervision, confirming the recently observed trend in the literature. We find ImageNet Top-1 accuracy to be highly correlated with transfer to many-shot recognition, but increasingly less so for few-shot, object detection and dense prediction. No single self-supervised method dominates overall, suggesting that universal pre-training is still unsolved. Our analysis of features suggests that top self-supervised learners fail to preserve colour information as well as supervised alternatives, but tend to induce better classifier calibration, and less attentive overfitting than supervised learners.
Domain generalization (DG) is the challenging and topical problem of learning models that generalize to novel testing domains with different statistics than a set of known training domains. The simple approach of aggregating data from all source domains and training a single deep neural network end-to-end on all the data provides a surprisingly strong baseline that surpasses many prior published methods. In this paper we build on this strong baseline by designing an episodic training procedure that trains a single deep network in a way that exposes it to the domain shift that characterises a novel domain at runtime. Specifically, we decompose a deep network into feature extractor and classifier components, and then train each component by simulating it interacting with a partner who is badly tuned for the current domain. This makes both components more robust, ultimately leading to our networks producing state-of-the-art performance on three DG benchmarks. Furthermore, we consider the pervasive workflow of using an ImageNet trained CNN as a fixed feature extractor for downstream recognition tasks. Using the Visual Decathlon benchmark, we demonstrate that our episodic-DG training improves the performance of such a general purpose feature extractor by explicitly training a feature for robustness to novel problems. This shows that DG training can benefit standard practice in computer vision.
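A hedged sketch of one episodic pairing described above: the feature extractor is updated through a badly tuned partner classifier (here frozen at random initialization), so its features must stay discriminative despite the mismatch; the pairing and loss details below are illustrative, not the paper's exact procedure.

```python
import torch.nn.functional as F

def episodic_feature_step(feature_extractor, partner_classifier,
                          images, labels, optimizer):
    """Train the feature extractor through a mismatched partner classifier
    (e.g., frozen at random initialization), simulating the domain shift a
    component faces at runtime."""
    for p in partner_classifier.parameters():
        p.requires_grad_(False)              # the partner stays badly tuned
    logits = partner_classifier(feature_extractor(images))
    loss = F.cross_entropy(logits, labels)   # only the extractor is updated
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The symmetric step, training the classifier through a mismatched feature extractor, would follow the same pattern.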
We present a conceptually simple, flexible, and general framework for few-shot learning, where a classifier must learn to recognise new classes given only a few examples of each. Our method, called the Relation Network (RN), is trained end-to-end from scratch. During meta-learning, it learns to learn a deep distance metric to compare a small number of images within episodes, each of which is designed to simulate the few-shot setting. Once trained, an RN is able to classify images of new classes by computing relation scores between query images and the few examples of each new class without further updating the network. Besides providing improved performance on few-shot learning, our framework is easily extended to zero-shot learning. Extensive experiments on five benchmarks demonstrate that our simple approach provides a unified and effective solution for both of these tasks.
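A minimal sketch of the relation-score computation described above; the layer sizes are illustrative stand-ins (the paper's embedding and relation modules are specific CNN architectures):

```python
import torch
import torch.nn as nn

class RelationHead(nn.Module):
    """Learned comparator: scores how well a query embedding matches a
    class embedding (e.g., the combined support embeddings of that class)."""
    def __init__(self, feat_dim=64, hidden=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid())   # relation score in [0, 1]

    def forward(self, query_emb, class_emb):
        # Concatenate the pair and let the network learn the metric.
        return self.net(torch.cat([query_emb, class_emb], dim=-1))
```

Within an episode, each query embedding would be paired with every candidate class embedding and assigned to the class with the highest relation score.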
The problem of domain generalization is to learn from multiple training domains and extract a domain-agnostic model that can then be applied to an unseen domain. Domain generalization (DG) has a clear motivation in contexts where target domains have distinct characteristics yet sparse training data, for example, recognition in sketch images, which are distinctly more abstract and rarer than photos. Nevertheless, DG methods have primarily been evaluated on photo-only benchmarks focused on alleviating dataset bias, where both domain distinctiveness and data sparsity can be minimal. We argue that these benchmarks are overly straightforward, and show that simple deep learning baselines perform surprisingly well on them. In this paper, we make two main contributions. Firstly, we build upon the favorable domain-shift-robust properties of deep learning methods and develop a low-rank parameterized CNN model for end-to-end DG learning. Secondly, we develop a DG benchmark dataset covering photo, sketch, cartoon and painting domains. This is both more practically relevant and harder (bigger domain shift) than existing benchmarks. The results show that our method outperforms existing DG alternatives, and our dataset provides a more significant DG challenge to drive future research.
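A hedged sketch of a low-rank domain parameterization in the spirit described above: each domain's weights are synthesized from a small shared basis, so domain-specific capacity grows only with the rank. The factorization form below is an illustrative assumption, not the paper's exact decomposition.

```python
import torch
import torch.nn as nn

class LowRankDomainConv(nn.Module):
    """Convolution whose per-domain kernel is a domain-specific mixture of a
    small shared bank of basis kernels, sharing parameters across the
    training domains through the low-rank structure."""
    def __init__(self, n_domains, rank, in_ch, out_ch, k=3):
        super().__init__()
        self.basis = nn.Parameter(torch.randn(rank, out_ch, in_ch, k, k) * 0.01)
        self.mix = nn.Parameter(torch.full((n_domains, rank), 1.0 / rank))

    def forward(self, x, domain):
        # Synthesize this domain's kernel from the shared basis.
        weight = torch.einsum('r,roikl->oikl', self.mix[domain], self.basis)
        return nn.functional.conv2d(x, weight, padding=1)
```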
Model distillation is an effective and widely used technique for transferring knowledge from a teacher to a student network. The typical application is to transfer from a powerful large network or ensemble to a small network better suited to low-memory or fast-execution requirements. In this paper, we present a deep mutual learning (DML) strategy where, rather than one-way transfer between a static pre-defined teacher and a student, an ensemble of students learn collaboratively and teach each other throughout the training process. Our experiments show that a variety of network architectures benefit from mutual learning and achieve compelling results on the CIFAR-100 recognition and Market-1501 person re-identification benchmarks. Surprisingly, no prior powerful teacher network is necessary: mutual learning of a collection of simple student networks works, and moreover outperforms distillation from a more powerful yet static teacher.
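A condensed sketch of the two-student case: each student minimizes supervised cross-entropy plus a KL term pulling it toward its peer's predictions. The unit weighting of the KL term is an illustrative assumption.

```python
import torch.nn.functional as F

def dml_losses(logits_a, logits_b, labels):
    """Deep mutual learning: each student's loss is cross-entropy on the
    ground truth plus KL divergence toward its peer's (detached) posterior."""
    log_p_a = F.log_softmax(logits_a, dim=1)
    log_p_b = F.log_softmax(logits_b, dim=1)
    # KL(p_b || p_a): peer b acts as a soft teacher for student a, and vice versa.
    kl_a = F.kl_div(log_p_a, log_p_b.exp().detach(), reduction='batchmean')
    kl_b = F.kl_div(log_p_b, log_p_a.exp().detach(), reduction='batchmean')
    loss_a = F.cross_entropy(logits_a, labels) + kl_a
    loss_b = F.cross_entropy(logits_b, labels) + kl_b
    return loss_a, loss_b
```

In practice the students would be updated in turn, each treating the other's detached predictions as a soft target.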
This paper proposes a perception and path planning pipeline for autonomous racing on an unknown bounded course. The pipeline was initially created for the 2021 evGrandPrix autonomous division and was further improved for the 2022 event, both of which resulted in first-place finishes. Using a simple LiDAR-based perception pipeline that feeds into an occupancy-grid-based expansion algorithm, we determine a goal point to drive toward. This pipeline reliably and consistently completed laps around a cone-defined track, averaging 6.85 m/s over a distance of 434.2 meters for a total lap time of 63.4 seconds.
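A hedged sketch of the expansion idea: LiDAR returns mark occupied cells in a local grid, free space is expanded outward from the vehicle, and the farthest reachable free cell becomes the goal point. The grid representation and the breadth-first expansion below are illustrative assumptions, not the team's exact algorithm.

```python
import numpy as np
from collections import deque

def goal_from_grid(occupied, start):
    """Breadth-first expansion over the free cells of a boolean occupancy
    grid; returns the farthest reachable free cell as the driving goal."""
    h, w = occupied.shape
    dist = np.full((h, w), -1, dtype=int)
    dist[start] = 0
    queue, goal = deque([start]), start
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and not occupied[nr, nc] \
                    and dist[nr, nc] < 0:
                dist[nr, nc] = dist[r, c] + 1
                if dist[nr, nc] > dist[goal]:
                    goal = (nr, nc)       # farthest free cell found so far
                queue.append((nr, nc))
    return goal
```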